
    Fourier ptychography: current applications and future promises

    Traditional imaging systems exhibit a well-known trade-off between the resolution and the field of view of their captured images. Typical cameras and microscopes can either “zoom in” and image at high resolution, or they can “zoom out” to see a larger area at lower resolution, but can rarely achieve both effects simultaneously. In this review, we present details about a relatively new procedure termed Fourier ptychography (FP), which addresses the above trade-off to produce gigapixel-scale images without requiring any moving parts. To accomplish this, FP captures multiple low-resolution, large field-of-view images and computationally combines them in the Fourier domain into a high-resolution, large field-of-view result. Here, we present details about the various implementations of FP and highlight its demonstrated advantages to date, such as aberration recovery, phase imaging, and 3D tomographic reconstruction, to name a few. After providing some basics about FP, we list important details for successful experimental implementation, discuss its relationship with other computational imaging techniques, and point to the latest advances in the field while highlighting persisting challenges.
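    The core computational idea — stitching low-resolution captures together in the Fourier domain via iterative phase retrieval — can be illustrated with a small NumPy sketch. This is a toy simulation, not the review's algorithm: the object, the spectral patch positions, and the Gerchberg–Saxton-style amplitude-replacement update are all simplified for illustration.

    ```python
    import numpy as np

    def fp_update(spectrum, measured_amp, cy, cx, m):
        # Crop the m-by-m sub-spectrum the objective would pass for one
        # illumination angle, go to image space, replace the amplitude with
        # the measured one while keeping the phase, and write the patch back.
        sl = np.s_[cy - m // 2:cy + m // 2, cx - m // 2:cx + m // 2]
        lowres = np.fft.ifft2(np.fft.ifftshift(spectrum[sl]))
        lowres = measured_amp * np.exp(1j * np.angle(lowres))
        spectrum[sl] = np.fft.fftshift(np.fft.fft2(lowres))
        return spectrum

    # Simulate a phase object and three overlapping spectral "captures".
    n, m = 64, 32
    rng = np.random.default_rng(0)
    obj = np.exp(1j * 0.5 * rng.standard_normal((n, n)))
    true_spec = np.fft.fftshift(np.fft.fft2(obj))
    centers = [(n // 2, n // 2), (n // 2, n // 2 + 8), (n // 2 - 8, n // 2)]
    amps = []
    for cy, cx in centers:
        patch = true_spec[cy - m // 2:cy + m // 2, cx - m // 2:cx + m // 2]
        amps.append(np.abs(np.fft.ifft2(np.fft.ifftshift(patch))))

    # Reconstruct by cycling the amplitude-replacement update over all captures.
    est = np.zeros((n, n), dtype=complex)
    for _ in range(10):
        for (cy, cx), a in zip(centers, amps):
            est = fp_update(est, a, cy, cx, m)
    ```

    Because the spectral patches overlap, each update constrains its neighbors, which is what allows the synthesized spectrum — and hence the resolution — to exceed any single capture.
    
    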

    Imaging dynamics beneath turbid media via parallelized single-photon detection

    Noninvasive optical imaging through dynamic scattering media has numerous important biomedical applications but still remains a challenging task. While standard methods aim to form images based upon optical absorption or fluorescent emission, it is also well-established that the temporal correlation of scattered coherent light diffuses through tissue much like optical intensity. Few works to date, however, have aimed to experimentally measure and process such data to demonstrate deep-tissue imaging of decorrelation dynamics. In this work, we take advantage of a single-photon avalanche diode (SPAD) array camera, with over one thousand detectors, to simultaneously detect speckle fluctuations at the single-photon level from 12 different phantom tissue surface locations delivered via a customized fiber bundle array. We then apply a deep neural network to convert the acquired single-photon measurements into video of scattering dynamics beneath rapidly decorrelating liquid tissue phantoms. We demonstrate the ability to record video of dynamic events occurring 5–8 mm beneath a decorrelating tissue phantom with mm-scale resolution and at a 2.5–10 Hz frame rate.
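    The measurable signal here is the temporal autocorrelation of detected speckle intensity: faster motion beneath the surface decorrelates the speckle more quickly. A minimal NumPy sketch of estimating the normalized intensity autocorrelation g2(τ) from multi-pixel photon counts follows; the AR(1) speckle model, pixel count, and decay constant are illustrative stand-ins, not the paper's acquisition parameters.

    ```python
    import numpy as np

    def g2(counts, max_lag):
        """Normalized intensity autocorrelation g2(tau), averaged over pixels.
        counts: (n_pixels, n_frames) array of photon counts."""
        c = counts.astype(float)
        mean = c.mean(axis=1)
        curve = []
        for tau in range(1, max_lag + 1):
            num = (c[:, :-tau] * c[:, tau:]).mean(axis=1)
            curve.append((num / mean**2).mean())
        return np.array(curve)

    # Simulate fully developed speckle decorrelating as an AR(1) complex field,
    # then Poisson photon detection. Siegert relation: g2(tau) = 1 + r**(2*tau).
    rng = np.random.default_rng(1)
    n_pix, n_t, r = 500, 2000, 0.9
    E = np.empty((n_pix, n_t), dtype=complex)
    E[:, 0] = (rng.standard_normal(n_pix) + 1j * rng.standard_normal(n_pix)) / np.sqrt(2)
    for t in range(1, n_t):
        w = (rng.standard_normal(n_pix) + 1j * rng.standard_normal(n_pix)) / np.sqrt(2)
        E[:, t] = r * E[:, t - 1] + np.sqrt(1 - r**2) * w
    counts = rng.poisson(5.0 * np.abs(E) ** 2)
    curve = g2(counts, max_lag=20)  # decays from ~1.8 toward 1
    ```

    Averaging across many SPAD pixels is what makes the estimate usable at short integration times — the motivation for parallelized single-photon detection in the paper.
    
    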

    Transient motion classification through turbid volumes via parallelized single-photon detection and deep contrastive embedding

    Fast noninvasive probing of spatially varying decorrelating events, such as cerebral blood flow beneath the human skull, is an essential task in various scientific and clinical settings. One of the primary optical techniques used is diffuse correlation spectroscopy (DCS), whose classical implementation uses a single or a few single-photon detectors, resulting in poor spatial localization accuracy and relatively low temporal resolution. Here, we propose a technique termed Classifying Rapid decorrelation Events via Parallelized single photon dEtection (CREPE), a new form of DCS that can probe and classify different decorrelating movements hidden beneath a turbid volume with high sensitivity, using parallelized speckle detection from a 32×32 pixel SPAD array. We evaluate our setup by classifying different spatiotemporal decorrelating patterns hidden beneath a 5 mm tissue-like phantom made with rapidly decorrelating dynamic scattering media. Twelve multi-mode fibers are used to collect scattered light from different positions on the surface of the tissue phantom. To validate our setup, we generate perturbed decorrelation patterns with both a digital micromirror device (DMD) modulated at multi-kilohertz rates and a vessel phantom containing flowing fluid. Along with a deep contrastive learning algorithm that outperforms classic unsupervised learning methods, we demonstrate that our approach can accurately detect and classify different transient decorrelation events (occurring within 0.1–0.4 s) underneath turbid scattering media, without any data labeling. This has the potential to be applied to noninvasively monitor deep-tissue motion patterns, for example identifying normal or abnormal cerebral blood flow events at multi-hertz rates, within a compact and static detection probe.
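    The paper's classifier is a deep contrastive network; as a much simpler stand-in that conveys the label-free idea, the sketch below clusters synthetic decorrelation traces (exponential decays with two different time constants) using plain k-means. The trace shapes, time constants, and deterministic initialization are all illustrative, not CREPE's actual model.

    ```python
    import numpy as np

    def kmeans(X, k=2, iters=50):
        # Deterministic toy initialization for this illustration:
        # use the first and last rows as starting centers.
        centers = X[[0, len(X) - 1]].copy()
        labels = np.zeros(len(X), dtype=int)
        for _ in range(iters):
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels

    # Synthetic autocorrelation decays: 20 "fast" (tau=3) and 20 "slow" (tau=12).
    rng = np.random.default_rng(2)
    t = np.arange(30)
    taus = np.concatenate([np.full(20, 3.0), np.full(20, 12.0)])
    traces = np.exp(-t[None, :] / taus[:, None]) + 0.02 * rng.standard_normal((40, 30))
    labels = kmeans(traces, k=2)  # separates fast from slow without any labels
    ```

    A contrastive embedding plays the same role as the raw trace space here, but learns a representation in which distinct decorrelation events separate far more robustly under noise.
    
    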

    Parallelized computational 3D video microscopy of freely moving organisms at multiple gigapixels per second

    To study the behavior of freely moving model organisms such as zebrafish (Danio rerio) and fruit flies (Drosophila) across multiple spatial scales, it would be ideal to use a light microscope that can resolve 3D information over a wide field of view (FOV) at high speed and high spatial resolution. However, it is challenging to design an optical instrument to achieve all of these properties simultaneously. Existing techniques for large-FOV microscopic imaging and for 3D image measurement typically require many sequential image snapshots, thus compromising speed and throughput. Here, we present 3D-RAPID, a computational microscope based on a synchronized array of 54 cameras that can capture high-speed 3D topographic videos over a 135 cm^2 area, achieving up to 230 frames per second at throughputs exceeding 5 gigapixels (GPs) per second. 3D-RAPID features a 3D reconstruction algorithm that, for each synchronized temporal snapshot, simultaneously fuses all 54 images seamlessly into a globally consistent composite that includes a coregistered 3D height map. The self-supervised 3D reconstruction algorithm itself trains a spatiotemporally compressed convolutional neural network (CNN) that maps raw photometric images to 3D topography, using stereo overlap redundancy and ray-propagation physics as the only supervision mechanism. As a result, our end-to-end 3D reconstruction algorithm is robust to generalization errors and scales to arbitrarily long videos from arbitrarily sized camera arrays. The scalable hardware and software design of 3D-RAPID addresses a longstanding problem in the field of behavioral imaging, enabling parallelized 3D observation of large collections of freely moving organisms at high spatiotemporal throughputs, which we demonstrate in ants (Pogonomyrmex barbatus), fruit flies, and zebrafish larvae.
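    The stereo-overlap supervision rests on the same geometry as classical block matching: a surface point's apparent horizontal shift (disparity) between two overlapping cameras encodes its height, via z ∝ f·baseline/disparity. The brute-force NumPy sketch below illustrates that relationship only; patch size, search range, and the synthetic uniform shift are illustrative, and 3D-RAPID's actual pipeline uses a CNN rather than explicit matching.

    ```python
    import numpy as np

    def disparity_map(left, right, patch=5, max_d=8):
        """Brute-force block matching along rows: for each pixel, find the
        horizontal shift that best aligns a small patch between two views."""
        h, w = left.shape
        half = patch // 2
        disp = np.zeros((h, w), dtype=int)
        for y in range(half, h - half):
            for x in range(half + max_d, w - half):
                ref = left[y - half:y + half + 1, x - half:x + half + 1]
                errs = [np.sum((ref - right[y - half:y + half + 1,
                                            x - d - half:x - d + half + 1]) ** 2)
                        for d in range(max_d + 1)]
                disp[y, x] = int(np.argmin(errs))
        return disp

    # Height then follows from triangulation: z ~ focal_length * baseline / d.
    rng = np.random.default_rng(3)
    left = rng.standard_normal((20, 40))
    right = np.roll(left, -3, axis=1)  # simulate a uniform 3-pixel disparity
    disp = disparity_map(left, right)
    ```

    In 3D-RAPID this redundancy between overlapping views, together with ray-propagation physics, is what supervises the network without any labeled height maps.
    
    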

    High-speed multi-objective Fourier ptychographic microscopy

    The ability of a microscope to rapidly acquire wide-field, high-resolution images is limited by both the optical performance of the microscope objective and the bandwidth of the detector. The use of multiple detectors can increase electronic-acquisition bandwidth, but the use of multiple parallel objectives is problematic since phase coherence is required across the multiple apertures. We report a new synthetic-aperture microscopy technique based on Fourier ptychography, where both the illumination and image-space numerical apertures are synthesized, using a spherical array of low-power microscope objectives that focus images onto mutually incoherent detectors. Phase coherence across apertures is achieved by capturing diffracted fields during angular illumination and using ptychographic reconstruction to synthesize wide-field, high-resolution, amplitude and phase images. Compared to conventional Fourier ptychography, the use of multiple objectives reduces image acquisition times by increasing the area for sampling the diffracted field. We demonstrate the proposed scalable architecture with a nine-objective microscope that generates an 89-megapixel, 1.1 µm resolution image nine times faster than can be achieved with a single-objective Fourier-ptychographic microscope. New calibration procedures and reconstruction algorithms enable the use of low-cost 3D-printed components for longitudinal biological sample imaging. Our technique offers a route to high-speed, gigapixel microscopy, for example, imaging the dynamics of large numbers of cells at scales ranging from sub-micron to centimetre, with an enhanced possibility to capture rare phenomena.

    Low-cost, sub-micron resolution, wide-field computational microscopy using open-source hardware

    The revolution in low-cost consumer photography and computation provides fertile opportunity for a disruptive reduction in the cost of biomedical imaging. Conventional approaches to low-cost microscopy are fundamentally restricted, however, to modest field of view (FOV) and/or resolution. We report a low-cost microscopy technique, implemented with a Raspberry Pi single-board computer and color camera combined with Fourier ptychography (FP), to computationally construct 25-megapixel images with sub-micron resolution. New image-construction techniques were developed to enable the use of the low-cost Bayer color sensor, to compensate for the highly aberrated re-used camera lens, and to compensate for misalignments associated with the 3D-printed microscope structure. This high ratio of performance to cost is of particular interest to high-throughput microscopy applications, ranging from drug discovery and digital pathology to health screening in low-income countries. 3D models and assembly instructions of our microscope are made available for open-source use.
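    One practical wrinkle with a Bayer color sensor is that raw frames interleave the color channels on a repeating 2×2 mosaic, so a quasi-monochromatic FP reconstruction must first extract the plane matching the LED color. A minimal sketch, assuming an RGGB layout (the actual sensor layout and the paper's pipeline may differ):

    ```python
    import numpy as np

    def bayer_plane(raw, channel="G"):
        """Extract one color plane from an assumed RGGB mosaic:
        R at (0,0), G at (0,1) and (1,0), B at (1,1) of each 2x2 block."""
        if channel == "R":
            return raw[0::2, 0::2]
        if channel == "B":
            return raw[1::2, 1::2]
        # Average the two green sites in each 2x2 block.
        return 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])

    raw = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 mosaic
    green = bayer_plane(raw, "G")  # half-resolution green plane
    ```

    Each extracted plane has half the resolution of the raw frame in each dimension, which the FP reconstruction then recovers computationally by synthesizing a larger aperture.
    
    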

    Imaging Dynamics Beneath Turbid Media via Parallelized Single-Photon Detection

    Noninvasive optical imaging through dynamic scattering media has numerous important biomedical applications but still remains a challenging task. While standard diffuse imaging methods measure optical absorption or fluorescent emission, it is also well-established that the temporal correlation of scattered coherent light diffuses through tissue much like optical intensity. Few works to date, however, have aimed to experimentally measure and process such temporal correlation data to demonstrate deep-tissue video reconstruction of decorrelation dynamics. In this work, a single-photon avalanche diode array camera is utilized to simultaneously monitor the temporal dynamics of speckle fluctuations at the single-photon level from 12 different phantom tissue surface locations delivered via a customized fiber bundle array. Then a deep neural network is applied to convert the acquired single-photon measurements into video of scattering dynamics beneath rapidly decorrelating tissue phantoms. The ability to reconstruct images of transient (0.1–0.4 s) dynamic events occurring up to 8 mm beneath a decorrelating tissue phantom with millimeter-scale resolution is demonstrated, and it is highlighted how the model can flexibly extend to monitor flow speed within buried phantom vessels.